Alexander Litvinenko


    Fast algorithms
    • Low-rank Tensor Arithmetic for stochastic PDEs
    • Hierarchical Domain Decomposition
    • Hierarchical Matrices (HLIB, HLIBPro)
    • Numerical Methods for Multiscale Problems
    Pattern Recognition and Data Analysis
    • Classification Algorithms (decision trees, their complexity and accuracy)
    • Cluster Analysis

My current research interests are: efficient numerical methods for solving multi-parametric/stochastic PDEs; multi-linear algebra for analyzing large data sets and low-rank/sparse tensor methods for uncertainty quantification; uncertainty quantification in inverse problems and data assimilation; and spatio-temporal statistics. My research is motivated by real-world applications in subsurface hydrology, oil recovery, vadose zone hydrology, and aerospace engineering, among other areas. I have made contributions to: fast numerical techniques that use low-rank tensor approximations to solve stochastic PDEs; an inexpensive functional generalized polynomial approximation of the classical Bayesian update formula; theory and numerical methods for approximating large covariance matrices in spatial statistics using low-rank concepts; and probabilistic/stochastic methods to model and quantify uncertainties in coefficients, parameters, and computational geometry. I have collaborated with researchers from different areas, including geoscientists, engineers, and statisticians.


I earned my B.S. and M.S. degrees working on data analysis at the Sobolev Institute of Mathematics and Novosibirsk State University. During my undergraduate studies, my research focused on building optimal decision trees for various application areas [22-27].


I did my Ph.D. [21] in the group of Prof. Hackbusch at the Max Planck Institute for Mathematics in the Sciences in Leipzig, Germany. My Ph.D. research combined domain decomposition methods and hierarchical matrices for solving elliptic PDEs with jumping and strongly oscillatory coefficients [14]. The result is the hierarchical domain decomposition (HDD) method, which can be used for partial evaluation of the discrete solution of an elliptic problem. The method constructs a discrete solution operator for each node of the hierarchical/recursive domain decomposition tree; this operator is applied to the given boundary values and source term and evaluates the solution on the interfaces/skeletons. The HDD method allows computing the solution on different scales and truncating components related to small subdomains. It is particularly well suited for problems with oscillatory coefficients, where reduced-order modelling of the small oscillations is needed. Matrix operations (e.g., the matrix-matrix product, matrix inversion, the Schur complement) are performed in the hierarchical matrix format at a computational cost of O(n log n), where n is the number of unknowns.
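The core mechanism behind hierarchical matrices can be illustrated in a few lines: blocks of a kernel matrix that correspond to well-separated point clusters admit accurate low-rank approximation. A minimal NumPy sketch (the kernel, point sets, and tolerance are illustrative choices, not the HLIB/HLIBPro implementation):

```python
import numpy as np

def lowrank_block(A, tol=1e-10):
    """Compress a matrix block by truncated SVD to relative accuracy tol."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))   # numerical rank of the block
    return U[:, :k] * s[:k], Vt[:k, :]        # A ≈ B @ C

# Kernel 1/|x - y| evaluated between two well-separated point clusters:
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(3.0, 4.0, 200)
A = 1.0 / np.abs(x[:, None] - y[None, :])

B, C = lowrank_block(A)
rel_err = np.linalg.norm(A - B @ C) / np.linalg.norm(A)
# the numerical rank B.shape[1] is far below 200 while rel_err stays tiny
```

An H-matrix applies this compression recursively on a cluster tree of the index set, which is what yields the O(n log n) arithmetic quoted above.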


Low-rank/sparse tensor methods


My research on low-rank/sparse tensor methods has several directions. Below, I describe two of them.


In my work [15], we introduce new methods for analyzing high-dimensional data in tensor formats, where the underlying data come from a stochastic elliptic boundary value problem. After discretization and parameterization of the random field, the resulting high-dimensional operator can be approximated by sums of elementary tensors. This tensor representation can be used effectively to compute various quantities of interest, such as the maximum norm, level sets, and the cumulative distribution function. The basic concepts of high-dimensional data analysis are discussed for tensors in the canonical format, but the approach carries over easily to other tensor formats. We describe efficient iterative algorithms for computing the characteristic and sign functions as well as the point-wise inverse in the canonical tensor format. Since the representation rank grows during most algebraic operations and during the iteration steps, we use low-rank approximation and inexact recursive iteration schemes.
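To make the canonical format concrete, the following sketch (hypothetical sizes, plain NumPy) stores a d-way tensor as a sum of rank-one terms, evaluates single entries without ever forming the full tensor, and shows how a Hadamard (point-wise) product multiplies the representation ranks, which is why the iterations above need rank truncation:

```python
import numpy as np

def cp_entry(factors, idx):
    """One entry of a CP tensor: sum over rank of products of factor rows."""
    v = np.ones(factors[0].shape[1])
    for U, i in zip(factors, idx):
        v = v * U[i, :]
    return v.sum()

def cp_hadamard(f, g):
    """Point-wise product of two CP tensors; the ranks multiply (R1 * R2)."""
    return [np.einsum('ir,is->irs', U, V).reshape(U.shape[0], -1)
            for U, V in zip(f, g)]

rng = np.random.default_rng(0)
d, n, R = 4, 5, 3
f = [rng.standard_normal((n, R)) for _ in range(d)]
h = cp_hadamard(f, f)                       # rank grows from 3 to 9

full = np.einsum('ar,br,cr,dr->abcd', *f)   # dense reference (small sizes only)
# cp_entry(h, (1, 2, 3, 0)) equals full[1, 2, 3, 0] ** 2 up to roundoff
```

Storage is d·n·R numbers instead of n^d, which is what makes iterations on high-dimensional solutions feasible at all.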


In my paper [7], we describe an efficient approximation of the stochastic Galerkin matrix arising from a stationary diffusion equation. The uncertain permeability coefficient is assumed to be a log-normal random field with given mean and covariance functions. The approximation is computed in the canonical tensor format and then compared numerically with the tensor train and hierarchical tensor formats. We show that, under additional assumptions, the approximation error depends only on the smoothness of the covariance function and neither on the number of random variables nor on the degree of the multivariate Hermite polynomials.


Other works are [3,8,10,14,17,19,20].


Uncertainty Quantification in Forward Simulation


Uncertainty quantification is important in many applications. In my research, I have focused on designing efficient uncertainty quantification techniques and on their applications [3,5,8,9]. Some of this work is described below.


In my work [3], we apply the tensor train (TT) decomposition to construct a tensor-product polynomial chaos expansion (PCE) of a random field, in order to solve the stochastic PDE and to compute quantities of interest. To construct the TT approximation of the random diffusion coefficient, we develop a new block TT cross algorithm. The new method is conceptually similar to adaptive cross approximation in the TT format, but is more efficient when several tensors must be stored in the same TT representation, which is the case for the PCE. In addition, we demonstrate how to assemble the stochastic Galerkin matrix and how to compute the solution of the elliptic equation and its post-processing while staying in the TT format. We compare our technique with the traditional sparse polynomial chaos and Monte Carlo approaches. The numerical experiments confirm that the new methodology is competitive in a wide range of parameters, especially where high accuracy and high polynomial degrees are required.
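For readers unfamiliar with the TT format, the following sketch builds a TT decomposition of a full tensor by successive truncated SVDs (the classical TT-SVD; the block cross algorithm of [3] avoids ever forming the full tensor, which this toy example does not):

```python
import numpy as np

def tt_svd(T, tol=1e-10):
    """Decompose a full tensor into TT cores by successive truncated SVDs."""
    dims, cores, r = T.shape, [], 1
    M = T.reshape(dims[0], -1)
    for n in dims[:-1]:
        M = M.reshape(r * n, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int(np.sum(s > tol * s[0])))   # truncated TT rank
        cores.append(U[:, :rk].reshape(r, n, rk))
        M = s[:rk, None] * Vt[:rk, :]
        r = rk
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract the TT cores back into the full tensor (for verification)."""
    G = cores[0]
    for C in cores[1:]:
        G = np.tensordot(G, C, axes=([G.ndim - 1], [0]))
    return G.reshape(G.shape[1:-1])

# A 4-way tensor that is the sum of two separable (rank-one) terms:
rng = np.random.default_rng(1)
T = (np.einsum('a,b,c,d->abcd', *[rng.standard_normal(4) for _ in range(4)])
     + np.einsum('a,b,c,d->abcd', *[rng.standard_normal(4) for _ in range(4)]))
cores = tt_svd(T)
# every TT rank is at most 2, and tt_full(cores) reproduces T
```

The storage is the list of small three-way cores, linear in the dimension d, instead of the exponential n^d of the full tensor.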

In my work [5], we quantify uncertainty in aerodynamic simulations. In this paper, we compare five methods (quasi-Monte Carlo quadrature; polynomial chaos with coefficients determined by sparse quadrature; gradient-enhanced kriging; radial basis functions; and point-collocation polynomial chaos) in terms of their efficiency in estimating statistics of aerodynamic performance under random perturbations of the airfoil geometry, which is parameterized by nine independent Gaussian variables. The results show that gradient-enhanced surrogate methods achieve better accuracy than direct integration methods at the same computational cost.


In the work [9], I study the propagation of uncertainties in parameters and in the airfoil geometry to the solution. Typical examples of uncertain parameters are the angle of attack and the Mach number. The discretization techniques used here are the Karhunen-Loève and polynomial chaos expansions. To evaluate high-dimensional integrals over the probability space, we use Monte Carlo simulation and collocation on sparse grids. To reduce storage requirements and computing time, we demonstrate a data-compression algorithm based on a low-rank approximation of realizations of the random fields. This low-rank approximation enables efficient post-processing (e.g., computation of the mean, variance, etc.) with linear complexity and drastically reduced memory requirements.
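The post-processing step above can be sketched as follows: once the matrix of realizations is stored in low-rank factored form, the mean and variance follow from the factors alone, in linear complexity (synthetic low-rank data and sizes chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, M, k = 2000, 500, 10                      # grid size, realizations, rank
# synthetic snapshot matrix with low intrinsic rank (assumption for the demo)
W = rng.standard_normal((n, k)) @ rng.standard_normal((k, M))

# Low-rank factorization W ≈ A @ B: O(k(n + M)) storage instead of O(nM)
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A, B = U[:, :k] * s[:k], Vt[:k, :]

# Mean and variance from the factors, never touching the n x M matrix again:
mean_lr = A @ B.mean(axis=1)                          # O(nk + kM) work
G = (B @ B.T) / M                                     # small k x k Gram matrix
var_lr = ((A @ G) * A).sum(axis=1) - mean_lr**2       # diag(A G A^T) - mean^2
```

Both quantities agree with the statistics computed from the full snapshot matrix, at a fraction of the memory traffic.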


Other works are [3,5,6,9,10,19].


Bayesian Update and Uncertainty Quantification in Inverse Problems


When a mathematical or computational model is used to analyze a system, it is common that some parameters or fields in the model are not known. These parametric quantities are then identified from actual observations of the response of the real system. In a probabilistic setting, Bayes' theory is the proper mathematical tool for identifying the model parameters and the associated uncertainties. The ability to compute a conditional expectation turns out to be crucial for this purpose. In my work [1], we study the use of Bayes' theory in an actual numerical procedure and discuss various numerical approximations.


In a Bayesian setting, inverse problems and uncertainty quantification (UQ) --- the propagation of uncertainty through a computational (forward) model --- are strongly connected. In the form of a conditional expectation, the Bayesian update becomes computationally attractive. In [2], we give a detailed account of this approach via conditional expectation, various approximations, and the construction of filters. Together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly converging Monte Carlo sampling. The developed sampling-free non-linear Bayesian update in the form of a filter is derived from the variational problem associated with conditional expectation. We compare the linear and non-linear Bayesian updates in filter form on several examples.


In [11], we present a probabilistic interpretation of inverse problems in which unknown quantities are represented by random fields. The introduced random fields are described in a "white noise" framework, which enables us to solve the stochastic forward problem through Galerkin projection onto polynomial chaos. With such a representation, the probabilistic identification problem is cast in a polynomial chaos expansion setting with Bayes' linear form of updating. Introducing the Hermite algebra turns this into a purely algebraic way of computing the posterior, which is comparatively inexpensive to evaluate. In addition, we show that the well-known Kalman filter is the low-order part of this update. The proposed method is tested on a stationary diffusion equation with prescribed source terms and an uncertain conductivity parameter, which is identified from limited and noisy measurement data.
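A one-dimensional numerical illustration of the linear Bayesian update, whose low-order part is the Kalman filter: all numbers below are invented for the demo, and the papers above apply the update to polynomial chaos coefficients rather than to raw samples as done here.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
q = rng.normal(1.0, 2.0, N)        # prior ensemble: q ~ N(1, 4)
eps = rng.normal(0.0, 1.0, N)      # measurement noise: N(0, 1)
y = q + eps                        # forecast of the observation
z = 3.0                            # actual measurement

K = np.cov(q, y)[0, 1] / y.var()   # Kalman gain = cov(q, y) / var(y)
q_post = q + K * (z - y)           # linear update of every ensemble member

# Exact Gaussian posterior here: mean (1/4 + 3/1)/(1/4 + 1/1) = 2.6, variance 0.8
```

In the conjugate Gaussian case this linear update reproduces the exact posterior; in the non-Gaussian case it is only the low-order part, which motivates the non-linear extensions in [1,2].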

Other works are [1,2,6,11,13,16,18].


Hierarchical/scalable/parallel methods


In [19], we prove that an H-matrix approximation of the covariance matrix exists. We apply the H-matrix technique to approximate random fields with as few random variables as possible by computing the Karhunen-Loève expansion (KLE). The cost of solving the corresponding large eigenvalue problem is drastically reduced to O(n log n), where n is the number of grid nodes.
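In dense arithmetic the computation in [19] looks as follows; the point of the H-matrix version is that the covariance matrix is never formed densely, so the same truncated eigenproblem costs O(n log n) rather than the O(n^3) of the sketch. The exponential covariance and correlation length are illustrative choices:

```python
import numpy as np

n = 400
x = np.linspace(0.0, 1.0, n)
ell = 0.2                                              # correlation length
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)     # exponential covariance

lam, V = np.linalg.eigh(C)                             # dense eigensolve: O(n^3)
lam, V = lam[::-1], V[:, ::-1]                         # sort descending

# Truncate the KLE where 95% of the total variance (trace) is captured:
m = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95)) + 1
# kappa(x_i, omega) ≈ mean + sum_{j<m} sqrt(lam_j) V[i, j] xi_j(omega)
```

The eigenvalue decay means m stays small compared to n, so the random field is parameterized by few random variables xi_j.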


In [20], we propose and analyse a new hierarchical Cholesky (H-Cholesky) factorization-based preconditioner for iteratively solving elliptic equations with strongly jumping coefficients, arising in the so-called skin-modeling problem. First, we construct a block-diagonal approximation of the FE stiffness matrix that is well suited to the "perforated" structure of the coefficients. Then, we apply the H-Cholesky factorization of this block-diagonal matrix as a preconditioner in the PCG iteration. The new preconditioner is shown to be robust with respect to jumps in the coefficients, and it requires less storage and computing time than the standard H-Cholesky factorization.
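The structure of this preconditioner can be sketched in a few lines: dense Cholesky of the diagonal blocks stands in for the H-Cholesky of [20], and a 1D jumping-coefficient diffusion matrix stands in for the skin-problem stiffness matrix (both are simplifying assumptions for the demo):

```python
import numpy as np

def pcg(A, b, blocks, tol=1e-10, maxit=500):
    """PCG with a block-diagonal Cholesky preconditioner (dense sketch)."""
    Ls = [np.linalg.cholesky(A[np.ix_(ix, ix)]) for ix in blocks]
    def precond(r):                      # solve the block-diagonal system
        z = np.empty_like(r)
        for ix, L in zip(blocks, Ls):
            z[ix] = np.linalg.solve(L.T, np.linalg.solve(L, r[ix]))
        return z
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(b):
            return x, it
        z_new = precond(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, maxit

# 1D diffusion with a coefficient jumping by four orders of magnitude;
# the two preconditioner blocks are aligned with the jump.
n = 200
kappa = np.where(np.arange(n + 1) < n // 2, 1.0, 1e4)
A = (np.diag(kappa[:-1] + kappa[1:])
     - np.diag(kappa[1:-1], 1) - np.diag(kappa[1:-1], -1))
b = np.ones(n)
blocks = [np.arange(n // 2), np.arange(n // 2, n)]
x, iters = pcg(A, b, blocks)
```

Because the blocks follow the coefficient structure, the iteration count stays small and does not degrade as the jump grows, which is the robustness property proved in [20].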


In [4], we work on parallelizing hierarchical matrix techniques on GPUs.


Fast Methods in Spatial Statistics


In [14], we exploit the separability of certain covariance functions and low-rank tensor techniques to speed up FFT-based kriging algorithms. For separable (factorized) covariance functions the results are exact, and non-separable covariance functions can be approximated well by sums of separable components. We achieve speedup factors of up to 10^8 (eight orders of magnitude), and we can treat problem sizes of up to 15 trillion and two quadrillion estimation points for kriging and spatial design, respectively, within seconds on a contemporary desktop computer. The study assumes second-order stationarity and simple kriging on a regular, equispaced lattice.
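The FFT building block used in [14] can be demonstrated in 1D: on an equispaced lattice a stationary covariance matrix is Toeplitz, and embedding it in a circulant matrix reduces the covariance matrix-vector product to FFTs of length 2n. A sketch with an illustrative exponential covariance:

```python
import numpy as np

def toeplitz_matvec_fft(c, v):
    """y = T v in O(n log n), T symmetric Toeplitz with first column c."""
    n = len(c)
    col = np.concatenate([c, [0.0], c[-1:0:-1]])   # circulant embedding, size 2n
    vp = np.concatenate([v, np.zeros(n)])           # zero-padded input vector
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(vp)).real
    return y[:n]

n = 256
x = 0.1 * np.arange(n)                   # equispaced lattice
c = np.exp(-x / 2.0)                     # exponential covariance, first column
v = np.random.default_rng(4).standard_normal(n)

i = np.arange(n)
T = c[np.abs(i[:, None] - i[None, :])]   # dense reference matrix, O(n^2)
# toeplitz_matvec_fft(c, v) agrees with T @ v to machine precision
```

In d dimensions the same idea applies level-wise to block-Toeplitz structure, and separable covariances factorize the transforms dimension by dimension, which is where the huge speedups come from.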

I am currently finishing the paper "Likelihood Approximation with Hierarchical Matrices for Large Spatial Datasets" with Prof. Marc Genton (Spatial Statistics and Data Analysis) and Prof. Ying Sun (Environmental Statistics). We seek the maximum of the log-likelihood function, which depends non-linearly on the covariance matrix. The computational issue is that this covariance matrix can be very large and depends on 1-5 unknown hyper-parameters. At each iteration of the maximization algorithm, a Cholesky factorization and a matrix determinant have to be computed. We are developing an efficient numerical method to speed up these intensive computations.
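The kernel being accelerated is one evaluation of the Gaussian log-likelihood; in dense arithmetic it reads as follows (an exponential covariance with a single hyper-parameter theta is an illustrative stand-in, and the H-matrix version replaces the O(n^3) Cholesky):

```python
import numpy as np

def neg_log_likelihood(theta, x, z):
    """Negative Gaussian log-likelihood of data z at 1D locations x."""
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / theta)  # covariance(theta)
    L = np.linalg.cholesky(C)                             # dominant O(n^3) cost
    alpha = np.linalg.solve(L, z)                         # z' C^{-1} z = |alpha|^2
    log_det = 2.0 * np.log(np.diag(L)).sum()              # log det C from L
    return 0.5 * (log_det + alpha @ alpha + len(z) * np.log(2.0 * np.pi))

rng = np.random.default_rng(5)
n = 300
x = np.sort(rng.uniform(0.0, 10.0, n))
C0 = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)
z = np.linalg.cholesky(C0) @ rng.standard_normal(n)       # sample with theta = 2
# neg_log_likelihood(theta, x, z) is what the optimizer evaluates per iteration
```

Both the log-determinant and the quadratic form come from the single Cholesky factor, which is why a fast (hierarchical) factorization accelerates the whole maximization loop.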


Future plans

During 2016-2017, I plan the following research projects. Each project is briefly described below, together with possible publication information.

[R1] M. Espig, W. Hackbusch, A. Litvinenko and H.G. Matthies, An Efficient Method for the Computation of the Stochastic Galerkin Projection by Means of Tensor Format Representations

In [R1], we develop a new low-rank tensor method for solving multi-dimensional stochastic elliptic equations.



[R2] H.G. Matthies, E. Zander, B.V. Rosić, A. Litvinenko, Parameter estimation via conditional expectation: a Bayesian inversion, Advanced Modeling and Simulation in Engineering Sciences 3 (1), 24, 2016

In this work, we further develop inexpensive non-linear surrogates that approximate the Bayesian update formula, in order to avoid expensive Markov chain Monte Carlo sampling. We will compare our results with those obtained by the classical Markov chain Monte Carlo method (in collaboration with Habib Najm and Kody Law).



[R3] A. Litvinenko, Y. Marzouk, H. G. Matthies, M. Scavino, A. Spantini, Bayesian update of a random variable via its characteristic function

In [R3], we derive an equivalent of the classical Bayesian update formula for probability characteristic functions (Fourier transforms of probability densities). The low-rank properties of the characteristic function are combined with the multi-dimensional Fourier transform as well as with the polynomial chaos expansion. This combination may allow us to find a better low-rank representation and to keep all operations in a low-rank tensor format.


[R4] A. Litvinenko, E. Uysal, H. Ulku, J. Oppelstrup, R. Tempone, H. Bagci, Computation of Electromagnetic Fields Scattered From Dielectric Objects of Uncertain Shapes Using a Multilevel Monte Carlo Scheme.

Simulators capable of computing fields scattered from objects of uncertain shapes are highly useful in electromagnetics and photonics, where device designs are typically subject to fabrication tolerances. Knowledge of the statistical variations in the scattered fields helps ensure error-free functioning of such devices. We apply a multilevel Monte Carlo scheme to quantify the uncertainties in the quantities of interest.
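The multilevel Monte Carlo estimator structure is generic and can be sketched with a toy problem; the electromagnetic solver of [R4] plays the role of the level-l approximation P_l, here replaced by an Euler discretization of geometric Brownian motion (all model parameters and sample sizes are invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(6)

def level_difference(level, num_samples):
    """Coupled samples of P_l - P_{l-1}, driven by the same Brownian path."""
    steps = 2 ** level
    dt = 1.0 / steps
    dW = np.sqrt(dt) * rng.standard_normal((num_samples, steps))
    fine = np.ones(num_samples)
    for k in range(steps):                 # fine-level Euler path
        fine *= 1.0 + 0.05 * dt + 0.2 * dW[:, k]
    if level == 0:
        return fine                        # coarsest level: P_0 itself
    coarse = np.ones(num_samples)
    for k in range(steps // 2):            # coarse path reuses merged increments
        coarse *= 1.0 + 0.05 * (2 * dt) + 0.2 * (dW[:, 2 * k] + dW[:, 2 * k + 1])
    return fine - coarse

# Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]:
# many cheap coarse samples, few expensive fine ones.
samples = [40000, 20000, 10000, 5000, 2500, 1200]
estimate = sum(level_difference(l, samples[l]).mean() for l in range(6))
# exact value for this toy model: E[S_1] = exp(0.05) ≈ 1.0513
```

Because the level differences have small variance, most samples are taken on cheap coarse levels, which is exactly the cost saving exploited for the scattering simulations.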

[R5] A. Litvinenko, H. Najm, H.G. Matthies, Data-free inference of uncertain parameters in chemical models by the (non-)linear Bayesian update and MCMC.

In [R5], we apply our Bayesian surrogate models in an inference procedure for estimating uncertain parameters of a chemical model of methane-air ignition. We also compare our non-linear Bayesian update method with the nested Markov chain Monte Carlo approach proposed by H. Najm et al. in 2012.


Literature

1.     H.G. Matthies, E. Zander, B.V. Rosić, A. Litvinenko, Parameter estimation via conditional expectation: a Bayesian inversion, Advanced Modeling and Simulation in Engineering Sciences 3 (1), 24, 2016.

2.     H.G. Matthies, E. Zander, B. Rosic, A. Litvinenko, O. Pajonk, Inverse Problems in a Bayesian Setting, Chapter in Computational Methods for Solids and Fluids, Vol. 41 of series Comp. Meth. in Appl. Sciences, pp 245-286, 2016.

3.     S. Dolgov, B. N. Khoromskij, A. Litvinenko, H. G. Matthies, Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format, SIAM/ASA J. Uncertainty Quantification 3(1), pp 1109-1135, 2015.

4.     W. Boukaram, H. Ltaief, A. Litvinenko, A. Abdelfattah, D. Keyes, Accelerating Matrix-Vector Multiplication on Hierarchical Matrices Using Graphical Processing Units, extended abstract, International Computational Science and Engineering Conference, May 11-12, 2015, Qatar.

5.     D. Liu, A. Litvinenko, C. Schillings, V. Schulz, Quantification of airfoil geometry-induced aerodynamic uncertainties - comparison of approaches, submitted to SIAM/ASA J.  of Uncertainty Quantification, 2015.

6.     A. Litvinenko, H. G. Matthies, Inverse problems and uncertainty quantification, arXiv preprint arXiv:1312.5048.

7.     M. Espig, W. Hackbusch, A. Litvinenko, H.G. Matthies, P. Wähnert, Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats, Computers & Mathematics with Applications 67 (4), 818-829, 2014.

8.     L. Giraldi, A. Litvinenko, D. Liu, H.G. Matthies, A. Nouy, To Be or Not to Be Intrusive? The Solution of Parametric and Stochastic Equations---the “Plain Vanilla” Galerkin Case,  SIAM Journal on Scientific Computing 36 (6), A2720-A2744, 2014.

9.     A. Litvinenko, H.G. Matthies, Numerical Methods for Uncertainty Quantification and Bayesian update in Aerodynamics, chapter in the book Management and Minimisation of Uncertainties and Errors in Numerical Aerodynamics – Results of the German collaborative project MUNA, pp. 267-283, Editors: B. Eisfeld, H. Barnewitz, W. Fritz, F. Thiele, Springer, 2013.

10.   A. Litvinenko, H.G. Matthies, T. A. El-Moselhy, Sampling and Low-Rank Tensor Approximation of the Response Surface, Proceedings Monte Carlo and Quasi-Monte Carlo Methods 2012, edited by Josef Dick, Frances Y. Kuo, Gareth W. Peters, and Ian H. Sloan, 16 pages, Springer-Verlag.

11.   B. Rosic, A. Litvinenko, O. Pajonk, H. G. Matthies, Sampling-free  linear Bayesian update of polynomial chaos representations, J. Comp. Physics, 231(2012), pp 5761-5787.

12.   A. Litvinenko, H. G. Matthies, Uncertainty Quantification and NonLinear Bayesian Update of PCE Coefficients, PAMM 13 (1), 379-380.

13.   O. Pajonk, B. Rosic, A. Litvinenko, H. G. Matthies, A Deterministic Filter for non Gaussian Bayesian Estimation, Physica D: Nonlinear Phenomena, 241(2012), pp.775-788.

14.   W. Nowak, A. Litvinenko, Kriging accelerated by orders of magnitude: combining low-rank covariance approximations with FFT-techniques, Mathematical Geosciences, 01/2013; 45:411-435. DOI: 10.1007/s11004-013-9453-6.

15.   M. Espig, W. Hackbusch, A. Litvinenko, H. G. Matthies, E. Zander, Efficient Analysis of High Dimensional Data in Tensor Formats, Springer Lecture Note series (88) for Computational Science and Engineering, pp 31-56, 2013.

16.   B. Rosic, A. Kucerová, J. Sykora, O. Pajonk, A. Litvinenko, H. G. Matthies: Parameter Identification in a Probabilistic Setting, Engineering Structures (2013), Vol. 50, pp 179–196, doi:10.1016/j.engstruct.2012.12.029.

17.   H.G. Matthies, A. Litvinenko, O. Pajonk, B.V. Rosić, E. Zander, Parametric and Uncertainty Computations with Tensor Product Representations, Book “Uncertainty Quantification in Scientific Computing”, IFIP Advances in Information and Communication Technology Vol. 377, 2012, pp 139- 150,doi:10.1007/978-3-642-32677-6.

18.   B. Rosic, H.G. Matthies, A. Litvinenko, O. Pajonk, A. Kucerova, and J. Sykora, Bayesian Updating of uncertainties in the description of heat and moisture transport in heterogenous materials, International Conference on Adaptive Modeling and Simulation ADMOS 2011, D. Aubry and P. Diez (Eds), pp. 415-423, 2011.

19.   B.N. Khoromskij, A. Litvinenko, H. G. Matthies, Application of hierarchical matrices for computing the Karhunen-Loeve expansion, Springer, Computing, 84:49-67, 2009.

20.   B.N. Khoromskij, A. Litvinenko, Domain decomposition based H-matrix preconditioner for the skin problem in 2D and 3D. Domain Decomposition Methods in Science and Engineering XVII Lecture Notes in Computational Science and Engineering, 2008, Volume 60, II, pp 175-182.

21.   A. Litvinenko, Application of H-matrices for solving multiscale problems, PhD thesis, Leipzig University, April 2006.

22.   V. Berikov, G.S. Lbov, A. Litvinenko,  Discrete recognition problem with a randomized decision function.  Pattern Recognition and Image Analysis, Vol.14/2, pp 211-221, 2004.

23.   V. Berikov, A. Litvinenko, Choice of optimal decision tree complexity in discrete pattern recognition problems, Iskusstvennyi Intellekt (Artificial Intelligence) 2, 2004, pp. 17-21 [in Russian].

24.   V. Berikov, A. Litvinenko, The influence of prior knowledge on the expected performance of a classifier.  Pattern Recognition Letters, Vol. 24/15, pp  2537-2548, 2003.

25.   V. Berikov, G.S. Lbov, A. Litvinenko, Evaluation of recognition performance for discrete classifiers, The 6th German-Russian Workshop Pattern Recognition and Image Understanding ORGW2003, Katun, Altai Region, Russia, pp 34-37.

26.   V. Berikov, A. Litvinenko, Lecture notes: Methods for statistical data analysis with decision trees, Novosibirsk, Russia, 2002.

27.   V. Berikov, A. Litvinenko. On the evaluation of discrete classifiers, Computer Data Analysis and Modeling. Robustness and Computer Intensive Methods. Proceedings of the Sixth International Conference (September 10-14), Minsk, Belarus 2001, proceedings, pp 10-15.